Selective Ensemble under Regularization Framework
Authors
Abstract
An ensemble is generated by training multiple component learners for the same task and then combining them for prediction. It is known that when many trained learners are available, it is often better to ensemble some of them rather than all. The selection, however, is generally difficult, and heuristics are often used. In this paper, we investigate the problem under the regularization framework and propose a regularized selective ensemble algorithm, RSE. In RSE, the selection is reduced to a quadratic programming problem, which has a sparse solution and can be solved efficiently. Since it naturally fits the semi-supervised learning setting, RSE can also exploit unlabeled data to improve performance. Experimental results show that RSE can generate ensembles that are small in size yet have strong generalization ability.
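The abstract does not spell out the exact quadratic program, so the following is only a minimal sketch of the idea: component predictions are combined with weights constrained to the probability simplex, a quadratic objective is minimized with a standard solver, and learners receiving near-zero weight are dropped. The objective, the `lam` parameter, and all function names are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

def select_ensemble(P, y, lam=0.1):
    """Toy QP-style selective ensemble (illustrative only, not the paper's RSE).

    P : (n_samples, n_learners) matrix of component predictions in {-1, +1}
    y : (n_samples,) labels in {-1, +1}
    Minimizes a quadratic fit-plus-penalty objective over the probability
    simplex (non-negative weights summing to 1); the simplex constraint
    typically drives many weights to zero, giving a small selected subset.
    """
    n, m = P.shape
    Q = P.T @ P / n + lam * np.eye(m)   # quadratic term
    c = -P.T @ y / n                    # linear term

    def obj(w):
        return 0.5 * w @ Q @ w + c @ w

    def grad(w):
        return Q @ w + c

    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0,
             "jac": lambda w: np.ones_like(w)}]
    bounds = [(0.0, 1.0)] * m
    w0 = np.full(m, 1.0 / m)
    res = minimize(obj, w0, jac=grad, bounds=bounds,
                   constraints=cons, method="SLSQP")
    w = res.x
    selected = np.flatnonzero(w > 1e-3)  # learners with non-negligible weight
    return w, selected

# Toy usage: 5 component learners, 100 labeled examples
rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=100)
P = np.sign(y[:, None] + 0.8 * rng.standard_normal((100, 5)))
P[P == 0] = 1
w, selected = select_ensemble(P, y)
print(w.round(3), selected)
```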
Related papers
Max-Margin Stacking and Sparse Regularization for Linear Classifier Combination and Selection
The main principle of stacked generalization (or Stacking) is to use a second-level generalizer to combine the outputs of the base classifiers in an ensemble. In this paper, we investigate different combination types under the stacking framework, namely weighted sum (WS), class-dependent weighted sum (CWS), and linear stacked generalization (LSG). For learning the weights, we propose using regularize...
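As a rough illustration of the WS and CWS combination types mentioned above (the array shapes and names below are assumptions, not taken from the paper), the following sketch combines base-classifier class scores with either a single weight per classifier or a weight per classifier-class pair:

```python
import numpy as np

# scores has shape (n_samples, n_classifiers, n_classes): per-class scores
# produced by each base classifier for each sample.
def weighted_sum(scores, w):
    """WS: one weight per base classifier, shared across classes."""
    return np.einsum("nmk,m->nk", scores, w)

def class_dependent_weighted_sum(scores, W):
    """CWS: a separate weight per (classifier, class) pair."""
    return np.einsum("nmk,mk->nk", scores, W)

rng = np.random.default_rng(1)
scores = rng.random((4, 3, 2))   # 4 samples, 3 classifiers, 2 classes
print(weighted_sum(scores, np.array([0.5, 0.3, 0.2])).argmax(axis=1))
print(class_dependent_weighted_sum(scores, rng.random((3, 2))).argmax(axis=1))
```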
Selective ensemble of SVDDs with Renyi entropy based diversity measure
In this paper, a novel selective ensemble strategy for support vector data description (SVDD), using a Renyi entropy based diversity measure, is proposed to deal with the problem of one-class classification. In order to obtain a compact classification boundary, the radius of the ensemble is defined as the inner product of the vector of combination weights and the vector of the radii of the SVDDs. To make...
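The stated ensemble radius, the inner product of the combination weights with the individual SVDD radii, can be illustrated with a trivial numeric example (the values below are made up):

```python
import numpy as np

weights = np.array([0.5, 0.3, 0.2])   # combination weights, sum to 1
radii = np.array([1.2, 0.9, 1.5])     # radii of the individual SVDDs
ensemble_radius = weights @ radii     # 0.5*1.2 + 0.3*0.9 + 0.2*1.5
print(ensemble_radius)                # 1.17
```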
Tree based ensemble models regularization by convex optimization
Tree based ensemble methods can be seen as a way to learn a kernel from a sample of input-output pairs. This paper proposes a regularization framework to incorporate non-standard information not used in the kernel learning algorithm, so as to take advantage of incomplete information about output values and/or of some prior information about the problem at hand. To this end a generic convex opti...
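One common way to read "a tree ensemble learns a kernel" is to measure how often two points fall in the same leaf across the trees. The sketch below uses scikit-learn's random forest for that generic construction; it is not necessarily the specific kernel used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

leaves = forest.apply(X)   # (n_samples, n_trees) leaf index of each sample
# K[i, j] = fraction of trees in which samples i and j share a leaf
K = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
print(K.shape, K[0, :5].round(2))
```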
Studies of model selection and regularization for generalization in neural networks with applications
This thesis investigates the generalization problem in artificial neural networks, approaching it from two major directions: regularization and model selection. On the regularization side, under the framework of Kullback–Leibler divergence for feedforward neural networks, we develop a new formula for the regularization parameter in Gaussian density kernel estimation based on available training da...
Diversity Regularized Ensemble Pruning
Diversity among individual classifiers is recognized to play a key role in ensemble learning; however, few of its theoretical properties are known for classification. In this paper, by focusing on the popular ensemble pruning setting (i.e., combining classifiers by voting and measuring diversity in a pairwise manner), we present a theoretical study on the effect of diversity on the generalization performance of v...
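As a concrete example of a pairwise diversity measure in the pruning setting, the sketch below computes the average pairwise disagreement of an ensemble; this is a standard illustrative measure, not necessarily the one analysed in the paper.

```python
import numpy as np

def pairwise_disagreement(preds):
    """Average pairwise disagreement of an ensemble.

    preds : (m, n) array with the predictions of m classifiers on n samples.
    The disagreement of a pair is the fraction of samples on which the two
    classifiers predict differently.
    """
    m = preds.shape[0]
    total, pairs = 0.0, 0
    for i in range(m):
        for j in range(i + 1, m):
            total += np.mean(preds[i] != preds[j])
            pairs += 1
    return total / pairs

rng = np.random.default_rng(2)
preds = rng.choice([-1, 1], size=(5, 50))   # 5 classifiers, 50 samples
print(round(pairwise_disagreement(preds), 3))
```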
Journal:
Volume, Issue:
Pages: -
Publication date: 2009